We need to find a range of values to appropriately label events of concern.
We use a threshold on the Collision Probability (\(Pc\)) as a surrogate to label a concern event. A warning is then issued when
\(\hspace{2.5cm}\bf{Pc \ge Pc\_warn},\)
where \(\bf{Pc\_warn}\) is the Collision Probability that triggers a warning.
We need to evaluate warning thresholds by examining the trade space between risk aversion and tolerance.
We have the following working definitions:
\(\hspace{2.5cm}\bf{\text{False Negative (FN)} := \text{# of Concern Events in a year that did not trigger a warning}}\)
\(\hspace{2.5cm}\bf{\text{False Positive (FP)} := \text{# of False Alarms in a year, i.e., warnings issued for events that were not Concern Events}}\).
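A minimal sketch of the FN/FP definitions above, assuming per-event \(Pc\) values and known concern labels (the function name and all data below are made up for illustration):

```python
def count_fn_fp(pc_values, is_concern, pc_warn):
    """Count FN (concern events that drew no warning) and FP (false alarms)
    for a given warning threshold pc_warn."""
    fn = sum(c and pc < pc_warn for pc, c in zip(pc_values, is_concern))
    fp = sum((not c) and pc >= pc_warn for pc, c in zip(pc_values, is_concern))
    return fn, fp

# Illustrative events: estimated Pc and whether each was a true Concern Event.
pc_values = [3e-4, 8e-6, 2e-4, 5e-7, 1e-3]
is_concern = [True, True, False, False, True]

fn, fp = count_fn_fp(pc_values, is_concern, pc_warn=1e-4)
# fn == 1 (the 8e-6 concern event was missed); fp == 1 (the 2e-4 false alarm)
```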
We can now explore how the warning threshold affects the \(\text{FP}\) and \(\text{FN}\).
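One way to explore this is to sweep candidate warning thresholds and record the FN/FP counts at each. A hedged sketch with made-up \(Pc\) data (names and values are illustrative, not the document's actual data):

```python
def sweep_thresholds(pc_values, is_concern, thresholds):
    """Return (threshold, FN, FP) for each candidate warning threshold."""
    results = []
    for t in thresholds:
        fn = sum(c and pc < t for pc, c in zip(pc_values, is_concern))
        fp = sum((not c) and pc >= t for pc, c in zip(pc_values, is_concern))
        results.append((t, fn, fp))
    return results

# Illustrative event data: Pc values and true concern labels.
pc_values = [3e-4, 8e-6, 2e-4, 5e-7, 1e-3]
is_concern = [True, True, False, False, True]

for t, fn, fp in sweep_thresholds(pc_values, is_concern, [1e-6, 1e-5, 1e-4, 1e-3]):
    print(f"Pc_warn={t:.0e}  FN={fn}  FP={fp}")
```

As the sweep shows, raising the threshold trades false alarms (FP) for missed concern events (FN), which is exactly the risk-aversion-versus-tolerance trade space.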
Taking the optimal warning threshold for each fragment size and days to TCA, we evaluate the performance on random data samples and compute the 95% Confidence Interval for the FN and FP counts. The boxes indicate the 95% bounds and the dot marks the observed performance.
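A confidence interval of this kind can be obtained, for example, with a percentile bootstrap over the event sample. This sketch resamples events with replacement and takes the 2.5th/97.5th percentiles of the resulting FN/FP counts; the helper name and data are hypothetical, not the document's actual method:

```python
import random

def bootstrap_count_ci(pc_values, is_concern, pc_warn, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% bounds for FN and FP counts: resample events
    with replacement, recompute FN/FP each time, take the 2.5%/97.5% ranks."""
    rng = random.Random(seed)
    events = list(zip(pc_values, is_concern))
    fns, fps = [], []
    for _ in range(n_boot):
        sample = [rng.choice(events) for _ in events]
        fns.append(sum(c and pc < pc_warn for pc, c in sample))
        fps.append(sum((not c) and pc >= pc_warn for pc, c in sample))
    fns.sort()
    fps.sort()
    lo, hi = int(0.025 * n_boot), int(0.975 * n_boot) - 1
    return (fns[lo], fns[hi]), (fps[lo], fps[hi])

# Illustrative data: Pc per event and true concern labels.
pc_values = [3e-4, 8e-6, 2e-4, 5e-7, 1e-3]
is_concern = [True, True, False, False, True]
fn_ci, fp_ci = bootstrap_count_ci(pc_values, is_concern, pc_warn=1e-4)
```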